Reconfiguring Resilience for Existential Risk: Submission of Evidence to the Cabinet Office on the new UK National Resilience Strategy
This submission provides input on the UK Government's National Resilience Strategy Call for Evidence, which sought “public engagement to inform the development of a new Strategy that will outline an ambitious new vision for UK National Resilience and set objectives for achieving it.” In response, an interdisciplinary team of experts at the Centre for the Study of Existential Risk worked to prepare a concrete response to this call. In this document, we aim to share the contents of our submission for public deliberation.
While we laud the UK government's initiative to develop a new National Resilience Strategy, we argue that more work can and should be done to categorize and identify catastrophic, complex, and existential risks; we emphasize the importance of taking a long-term perspective on mitigating and responding to the challenges these pose; and we encourage the development of a more comprehensive strategy, as these risks are all intertwined in an interconnected and complex environment.
In our responses, we focus on the six broad thematic areas of the National Resilience Strategy (Risk and Resilience, Responsibilities and Accountability, Partnerships, Community, Investment, and Resilience in an Interconnected World), and provide key recommendations for improving UK national resilience, both from a general perspective on existential and global catastrophic risks and with regard to policies in key risk domains such as biorisk, climate risk, and emerging technologies within critical national infrastructure and defence systems.
A solution scan of societal options to reduce transmission and spread of respiratory viruses: SARS-CoV-2 as a case study.
Societal biosecurity - measures built into everyday society to minimize risks from pests and diseases - is an important aspect of managing epidemics and pandemics. We aimed to identify societal options for reducing the transmission and spread of respiratory viruses. We used SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) as a case study to meet the immediate need to manage the COVID-19 pandemic and eventually transition to more normal societal conditions, and to catalog options for managing similar pandemics in the future. We used a 'solution scanning' approach. We read the literature; consulted psychology, public health, medical, and solution scanning experts; crowd-sourced options using social media; and collated comments on a preprint. Here, we present a list of 519 possible measures to reduce SARS-CoV-2 transmission and spread. We provide a long list of options for policymakers and businesses to consider when designing biosecurity plans to combat SARS-CoV-2 and similar pathogens in the future. We also developed an online application to help with this process. We encourage testing of actions, documentation of outcomes, revisions to the current list, and the addition of further options.
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. The Future of Life Institute is acknowledged as a funder.
Activism by the AI Community: Analysing Recent Achievements and Future Prospects
The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks, drawing upon the literature on epistemic communities and on worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and on high bargaining power due to the high demand for a limited supply of AI 'talent'. Both are crucial to the future of AI activism and worthy of sustained attention.
Written Evidence - Defence industrial policy: procurement and prosperity
In this response we focus particularly on defence systems, and systems in adjacent markets, that integrate increasingly capable artificial intelligence (AI), especially those based on machine learning (ML). Many systems that the Ministry of Defence (MoD) is likely to procure over the next 5-10 years will integrate AI and ML; these systems are likely both to be strategically important and to introduce new vulnerabilities. These vulnerabilities are likely to pose significant national security risks over the next few decades, both for the UK and for the UK's allies. These systems are the focus of much of our work, and where we hope to add our expertise to the Committee's Inquiry. We make the following recommendations to protect against premature and/or unsafe procurement and deployment of ML-based systems:
- Improve systemic risk assessment in defence procurement;
- Ensure clear lines of responsibility so that senior officials can be held responsible for errors caused in the procurement chain and are therefore incentivised to reduce them;
- Acknowledge potential shifts in international standards for autonomous systems, and build flexible procurement standards accordingly;
- Update the MoD's definition of lethal autonomous weapons - the Integrated Security, Defence and Foreign Policy Review provides an excellent opportunity to bring the UK in line with its allies.
Filling gaps in trustworthy development of AI.
Incident sharing, auditing, and other concrete mechanisms could help verify the trustworthiness of actors.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.